
feat: add RedisStore, SQLiteStore, and wire store into Mpp.create() #98

Open

brendanjryan wants to merge 5 commits into main from feat/stores

Conversation

@brendanjryan (Collaborator) commented Mar 24, 2026

Summary

Adds two production-ready store backends and wires the store= parameter into Mpp.create().

Changes

New store backends (mpp.stores):

  • RedisStore — for multi-instance production deployments (Redis/Valkey). Uses SET NX EX for atomic put_if_absent. Install with pip install pympp[redis].
  • SQLiteStore — zero-infra production store using Python's built-in sqlite3 via aiosqlite. Uses INSERT OR IGNORE for atomic put_if_absent with lazy TTL expiry. Install with pip install pympp[sqlite].
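
The atomic put_if_absent semantics described above can be sketched as follows. This is a minimal synchronous illustration using the stdlib sqlite3 module (the actual SQLiteStore is async via aiosqlite; table and function names here are illustrative):

```python
import sqlite3
import time

def put_if_absent(db: sqlite3.Connection, key: str, value: str, ttl_seconds: int = 300) -> bool:
    """Insert key atomically; return True if the key was new."""
    now = time.time()
    # Lazy expiry: drop an expired row for this key so it can be re-inserted.
    db.execute("DELETE FROM kv WHERE key = ? AND expires_at <= ?", (key, now))
    cur = db.execute(
        "INSERT OR IGNORE INTO kv (key, value, expires_at) VALUES (?, ?, ?)",
        (key, value, now + ttl_seconds),
    )
    db.commit()
    # rowcount is 1 when the INSERT happened, 0 when the key already existed.
    return cur.rowcount == 1

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT, expires_at REAL)")
assert put_if_absent(db, "tx:0xabc", "seen") is True   # first write wins
assert put_if_absent(db, "tx:0xabc", "seen") is False  # duplicate rejected
```

The same first-writer-wins contract is what SET NX EX provides on the Redis side.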

Store wiring:

  • Mpp.create(store=...) and Mpp(store=...) automatically inject the store into intents that accept one (e.g., ChargeIntent). If an intent already has a store configured, it is not overwritten.
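
The injection behavior might look like the sketch below (hypothetical helper and class names; the real code targets intents with a _store attribute, per the commit message):

```python
from typing import Any, Optional

class ChargeIntentSketch:
    """Stand-in for an intent that accepts a store (hypothetical)."""
    def __init__(self, store: Optional[Any] = None) -> None:
        self._store = store

def wire_store(intents: dict, store: Any) -> None:
    """Inject `store` into intents that accept one, without overwriting."""
    for intent in intents.values():
        # Only touch intents that declare a _store slot and have none set.
        if hasattr(intent, "_store") and intent._store is None:
            intent._store = store

intents = {"charge": ChargeIntentSketch(), "other": ChargeIntentSketch(store="existing")}
wire_store(intents, store="redis")
assert intents["charge"]._store == "redis"     # injected
assert intents["other"]._store == "existing"   # not overwritten
```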

Optional dependencies in pyproject.toml:

  • pympp[redis]: redis>=5.0
  • pympp[sqlite]: aiosqlite>=0.20

Usage

```python
from mpp.server import Mpp
from mpp.methods.tempo import tempo, ChargeIntent
from mpp.stores import RedisStore  # or SQLiteStore

# Redis
from redis.asyncio import from_url
store = RedisStore(from_url("redis://localhost:6379"))  # from_url returns a client synchronously

# SQLite (zero-infra alternative)
# store = await SQLiteStore.create("mpp.db")

server = Mpp.create(
    method=tempo(
        intents={"charge": ChargeIntent()},
        recipient="0x...",
    ),
    store=store,  # automatically wired into ChargeIntent
)
```

brendanjryan and others added 2 commits March 23, 2026 20:14
- RedisStore: production-ready store using redis-py (SET NX EX for atomic
  put_if_absent, configurable TTL and key prefix)
- SQLiteStore: zero-infra production store using aiosqlite (INSERT OR IGNORE
  for atomic put_if_absent, lazy TTL expiry)
- Mpp.create(store=...) and Mpp(store=...) automatically inject the store
  into intents that have a _store attribute (e.g. ChargeIntent)
- New optional deps: pympp[redis], pympp[sqlite]
- 27 new tests covering all store backends and wiring behavior

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b9cf62d2fd


"""
now = time.time()
await self._db.execute(
"DELETE FROM kv WHERE key = ? AND expires_at <= ?", (key, now)


P1: Purge expired SQLite keys globally before inserts

put_if_absent only removes expired rows for the current key, but charge replay keys are transaction hashes and are almost always unique, so old rows are never revisited and never deleted. In production this makes TTL ineffective for storage reclamation, causing unbounded kv growth over time and eventual write/latency issues on long-lived servers. Expiry cleanup needs to target all expired rows (or run an equivalent global/batched vacuum path), not just the incoming key.


- Add redis service to docker-compose.yml
- 10 integration tests: CRUD, TTL verification, atomicity of
  put_if_absent under concurrent access, key prefix isolation
- Skipped automatically when REDIS_URL is not set
- Run with: docker compose up -d redis && REDIS_URL=redis://localhost:6379 uv run pytest -m redis -v
@brendanjryan brendanjryan requested a review from Slokh March 24, 2026 03:33
@brendanjryan brendanjryan added cyclops Trigger Cyclops PR audit and removed cyclops Trigger Cyclops PR audit labels Mar 25, 2026

@tempoxyz-bot tempoxyz-bot left a comment


👁️ Cyclops Review

This PR adds RedisStore and SQLiteStore as production replay-protection backends with automatic wiring via Mpp.create(store=...). The store implementations are clean and atomic, but the default 300-second TTL fundamentally undermines the replay protection invariant.


🚨 [SECURITY] _verify_transaction Never Records Hash in Store — Cross-Payload Replay Bypass

Severity: High
File: src/mpp/methods/tempo/intents.py:345-446 (not in diff — body only)

Replay protection via store.put_if_absent() is only enforced in _verify_hash(). When a user pays via a raw signed transaction (type: "transaction"), _verify_transaction() broadcasts it, confirms the receipt, and returns Receipt.success(tx_hash) — but never writes the resulting hash to the store. An attacker can pay once via the transaction path, observe the mined tx hash, then replay it via the hash path (type: "hash") to get a second free access.

Recommended Fix: In _verify_transaction, after confirming the receipt, record the tx hash:

```python
if self._store is not None:
    store_key = f"mpp:charge:{tx_hash.lower()}"
    await self._store.put_if_absent(store_key, tx_hash)
```

Reviewer Callouts
  • Store TTL Semantics Contract: The Store protocol has no concept of expiration, so ChargeIntent cannot distinguish durable vs. expiring stores. Replay safety depends on long-lived retention, not just atomicity.
  • Replay Protection Scope: Current replay protection assumes all payments flow through _verify_hash. The _verify_transaction path must participate in the same cache lifecycle.
  • Existing Test Gap: Replay tests (tests/test_tempo.py:280-330) only validate immediate duplicate rejection, not post-expiry reuse or cross-payload replay.

```python
client: Any,
*,
key_prefix: str = "mpp:",
ttl_seconds: int = 300,
```


🚨 [SECURITY] Default Store TTL Re-enables Transaction Hash Replay After 5 Minutes

Both new production stores expire replay keys after 300 seconds by default. ChargeIntent._verify_hash() treats store.put_if_absent() as the one-time-spend source of truth but never validates transaction age from on-chain data. After TTL expiry, an attacker requests a fresh challenge and resubmits an old tx hash — the expired key is treated as new, the on-chain receipt still validates, and the server grants access again. A one-time payment becomes a renewable 5-minute lease.

Recommended Fix: Do not expire tx-hash replay entries by default. Use non-expiring keys (or a very long operator-chosen retention). If bounded retention is desired, first add an invariant that rejects transactions older than the TTL window based on block timestamp.
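
The freshness invariant suggested above could be sketched as a simple age check against the block timestamp, so that no transaction old enough to have had its replay key expire is ever accepted (a hedged sketch; the function name and how the timestamp is obtained are assumptions, not the PR's code):

```python
import time

def within_replay_window(block_timestamp: float, ttl_seconds: int) -> bool:
    """Reject tx hashes whose block is older than the store's retention
    window, so an expired replay key can never match an acceptable tx."""
    return (time.time() - block_timestamp) <= ttl_seconds

# A tx mined 10 minutes ago fails a 5-minute window; a fresh one passes.
assert within_replay_window(time.time() - 600, ttl_seconds=300) is False
assert within_replay_window(time.time() - 10, ttl_seconds=300) is True
```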

```python
    Returns ``True`` if the key was new, ``False`` if it existed.
    """
    now = time.time()
    await self._db.execute("DELETE FROM kv WHERE key = ? AND expires_at <= ?", (key, now))
```


⚠️ [ISSUE] Lazy Pruning Only Targets Current Key — Unbounded Disk Growth

This DELETE is scoped to WHERE key = ?, so it only prunes expired rows for the key being inserted. In a payment system, tx hashes are unique and almost never queried again — expired rows for one-time keys accumulate indefinitely, growing the SQLite file without bound.

Recommended Fix: Periodically prune all expired rows (e.g., DELETE FROM kv WHERE expires_at <= ? without the key filter, rate-limited to run once per N inserts or on a background timer).
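
The rate-limited global sweep suggested here might be sketched as below (stdlib sqlite3 for illustration rather than aiosqlite; the wrapper class, counter, and threshold are assumptions):

```python
import sqlite3
import time

class PruningKV:
    """KV wrapper that sweeps ALL expired rows once every N inserts."""
    def __init__(self, db: sqlite3.Connection, prune_every: int = 100) -> None:
        self._db = db
        self._prune_every = prune_every
        self._inserts = 0

    def put(self, key: str, value: str, ttl_seconds: int) -> None:
        now = time.time()
        self._db.execute(
            "INSERT OR IGNORE INTO kv (key, value, expires_at) VALUES (?, ?, ?)",
            (key, value, now + ttl_seconds),
        )
        self._inserts += 1
        if self._inserts % self._prune_every == 0:
            # Global sweep: no key filter, so one-time keys are reclaimed too.
            self._db.execute("DELETE FROM kv WHERE expires_at <= ?", (now,))
        self._db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT, expires_at REAL)")
store = PruningKV(db, prune_every=2)
store.put("a", "1", ttl_seconds=-1)  # already expired on insert
store.put("b", "1", ttl_seconds=60)  # second insert triggers the sweep
assert db.execute("SELECT COUNT(*) FROM kv WHERE key = 'a'").fetchone()[0] == 0
```

A background timer would work equally well; the key property is that the DELETE has no key filter, so rows for one-time keys are eventually reclaimed.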
